1.
International Journal of Education and Literacy Studies ; 10(1):26-35, 2022.
Article in English | ProQuest Central | ID: covidwho-1824288

ABSTRACT

This study aimed to determine the opinions and digital literacy status of students preparing for university music talent exams during the COVID-19 pandemic. The research used the survey model, one of the quantitative research methods, and was limited to 300 students who took the music department aptitude exams of 21 different universities. The scope of the study covered the type of high school from which the students graduated, the universities to which they applied for special talent exams, their previous undergraduate education, whether they had researched distance education opportunities, and their digital literacy levels. Within the framework of the research problem, the effects of the pandemic period on the students' ear training, instrument training, voice training, and psychological state were investigated. According to the data obtained from the interview form administered to the students, the pandemic period gave them extra time for ear training, instrument training, and voice training, but this extra time could not be used effectively because there was no guidance from an educator. It was also concluded that the students felt inadequate in terms of digital literacy.

2.
22nd International Conference on Artificial Intelligence in Education, AIED 2021 ; 12749 LNAI:446-450, 2021.
Article in English | Scopus | ID: covidwho-1767421

ABSTRACT

The inevitable shift towards online learning due to the emergence of the COVID-19 pandemic triggered a strong need to assess students using shorter exams whilst ensuring reliability. This study explores a data-centric approach that utilizes feature importance to select a discriminative subset of questions from the original exam. Furthermore, the discriminative question subset's ability to approximate the students' exam scores is evaluated by measuring the prediction accuracy and by quantifying the error interval of the prediction. The approach was evaluated using two real-world exam datasets, from the Scholastic Aptitude Test (SAT) and the Exame Nacional do Ensino Médio (ENEM), which consist of student response data and the corresponding exam scores. The evaluation was conducted against randomized question subsets of sizes 10, 20, 30 and 50. The results show that our method estimates the full scores more accurately than a baseline model for most subset sizes while maintaining a reasonable error interval. The encouraging evidence found in this paper provides support for the strong potential of the ongoing study to provide a data-centric approach for exam size reduction. © 2021, Springer Nature Switzerland AG.
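The abstract does not give the authors' exact pipeline, but the general idea it describes can be sketched as follows: train a model on the full item-response matrix, rank questions by feature importance, keep the top-k questions, and predict the full exam score from that subset, comparing against a randomly chosen subset of the same size. The sketch below is illustrative only; the random-forest model, the synthetic data, and all variable names are assumptions, not the paper's implementation.

```python
# Minimal sketch of feature-importance-based question subset selection
# for approximating full exam scores. Synthetic data; assumed model choice.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_students, n_questions, k = 1000, 100, 20

# Synthetic item-response matrix (1 = correct answer) and full exam scores.
responses = (rng.random((n_students, n_questions)) < rng.random(n_questions)).astype(int)
full_scores = responses.sum(axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    responses, full_scores, test_size=0.2, random_state=0
)

# Rank questions by feature importance from a model trained on all questions.
ranker = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
top_k = np.argsort(ranker.feature_importances_)[::-1][:k]

# Predict full scores using only the selected question subset.
subset_model = RandomForestRegressor(n_estimators=200, random_state=0)
subset_model.fit(X_train[:, top_k], y_train)
pred = subset_model.predict(X_test[:, top_k])

# Baseline: a random question subset of the same size.
rand_k = rng.choice(n_questions, size=k, replace=False)
baseline = RandomForestRegressor(n_estimators=200, random_state=0)
baseline.fit(X_train[:, rand_k], y_train)
pred_rand = baseline.predict(X_test[:, rand_k])

print("MAE, importance-selected subset:", mean_absolute_error(y_test, pred))
print("MAE, random subset baseline:   ", mean_absolute_error(y_test, pred_rand))
```

In this hypothetical setup, the mean absolute error of the importance-selected subset versus the random baseline plays the role of the accuracy comparison described in the abstract; the error interval could likewise be estimated from the distribution of per-student prediction errors.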
